Time for dithering: fast and quantized random embeddings via the restricted isometry property
Recently, many works have focused on the characterization of non-linear
dimensionality reduction methods obtained by quantizing linear embeddings,
e.g., to reach fast processing time, efficient data compression procedures,
novel geometry-preserving embeddings or to estimate the information/bits stored
in this reduced data representation. In this work, we prove that many linear
maps known to respect the restricted isometry property (RIP) can induce a
quantized random embedding with controllable multiplicative and additive
distortions with respect to the pairwise distances of the data points being
considered. In other words, linear matrices having fast matrix-vector
multiplication algorithms (e.g., based on partial Fourier ensembles or on the
adjacency matrix of unbalanced expanders) can be readily used in the definition
of fast quantized embeddings with small distortions. This implication is made
possible by applying right after the linear map an additive and random "dither"
that stabilizes the impact of the uniform scalar quantization operator applied
afterwards. For different categories of RIP matrices, i.e., for different
linear embeddings of a metric space $(\mathcal{K}, \ell_q)$ in $(\mathbb{R}^m, \ell_p)$
with $p, q \geq 1$, we derive upper bounds on the
additive distortion induced by quantization, showing that it decays either when
the embedding dimension $m$ increases or when the distance of a pair of
embedded vectors in $\mathcal{K}$ decreases. Finally, we develop a novel
"bi-dithered" quantization scheme, which allows for a reduced distortion that
decreases when the embedding dimension grows and independently of the
considered pair of vectors.
Comment: Keywords: random projections, non-linear embeddings, quantization,
dither, restricted isometry property, dimensionality reduction, compressive
sensing, low-complexity signal models, fast and structured sensing matrices,
quantized rank-one projections (31 pages)
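The dither-then-quantize mechanism described above is easy to sketch numerically. In the following toy experiment (plain NumPy; a dense Gaussian matrix stands in for any RIP ensemble, and all dimensions and the bin width are arbitrary illustrative choices), a rescaled l1 distance between the quantized images tracks the Euclidean distance of the original points:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 128, 4096          # ambient and embedding dimensions (arbitrary toy sizes)
delta = 0.5               # bin width of the uniform scalar quantizer

A = rng.standard_normal((m, n))           # Gaussian stand-in for a RIP matrix
dither = rng.uniform(0.0, delta, size=m)  # drawn once, reused for every input

def embed(x):
    """Quantized random embedding: uniform scalar quantization of A @ x + dither."""
    return delta * np.floor((A @ x + dither) / delta)

x, y = rng.standard_normal(n), rng.standard_normal(n)
d_true = np.linalg.norm(x - y)
# A rescaled l1 distance between the quantized images estimates ||x - y||_2;
# the dither removes the quantizer's bias, leaving only a small distortion.
d_quant = np.sqrt(np.pi / 2) * np.mean(np.abs(embed(x) - embed(y)))
```

With the dither in place the quantizer is unbiased for the per-coordinate distances, so `d_quant` concentrates around `d_true` as the embedding dimension grows.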
Characterization and generation of PWM signals for high-efficiency class D amplifiers
The convergence of information technology and consumer electronics towards battery-powered portable devices has increased the interest in high-efficiency, low-dissipation amplifiers. Class D amplifiers are the state of the art in low power consumption and high performance amplification.
In this thesis we explore the possibility of exploiting nonlinearities introduced by the PWM modulation, by designing an optimized modulation law which scales its carrier frequency adaptively with the input signal's average power while preserving the SNR, thus reducing power consumption.
This is achieved by means of a novel analytical model of the PWM output spectrum, which shows how interfering harmonics and their bandwidth affect the spectrum. This allows for frequency scaling with negligible aliasing between the baseband spectrum and its harmonics.
We performed low noise power spectrum measurements on PWM modulations generated by comparing variable bandwidth, random test signals with a variable frequency triangular wave carrier. The experimental results show that power-optimized frequency scaling is both feasible and effective.
The new analytical model also suggests a new PWM architecture that can be applied to digitally encoded input signals which are predistorted and compared with a cosine carrier, which is accurately synthesized by a digital oscillator. This approach has been simulated in a realistic noisy model and tested in our measurement setup.
A zero-crossing search on the obtained PWM modulation law proves that this approach yields an equivalent signal quality with respect to traditional PWM schemes, while entailing the use of signals whose bandwidth is remarkably smaller due to the use of a cosine instead of a triangular carrier.
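As a minimal illustration of the comparison-based PWM generation underlying the thesis (the actual work concerns optimized, frequency-scaled carriers; here a fixed triangular carrier and arbitrary frequencies are assumed), a sketch in NumPy:

```python
import numpy as np

fs = 1_000_000       # simulation sampling rate in Hz (arbitrary)
f_c = 10_000         # triangular carrier frequency in Hz (arbitrary)
f_in = 500           # input tone frequency in Hz (arbitrary)
t = np.arange(0, 0.01, 1 / fs)

# Triangular carrier spanning [-1, 1], built from a sawtooth phase.
phase = (f_c * t) % 1.0
carrier = 4.0 * np.abs(phase - 0.5) - 1.0

x = 0.8 * np.sin(2 * np.pi * f_in * t)     # modulating signal
pwm = np.where(x > carrier, 1.0, -1.0)     # two-level natural-sampling PWM

# Averaging the PWM wave over each carrier period recovers the input:
# the duty cycle tracks the signal, which is why the baseband survives.
spc = fs // f_c                            # samples per carrier period
duty = pwm.reshape(-1, spc).mean(axis=1)
x_avg = x.reshape(-1, spc).mean(axis=1)
```

The per-period mean of the two-level wave stays close to the per-period mean of the input, which is the property that frequency-scaled carriers must preserve while reducing switching activity.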
On Known-Plaintext Attacks to a Compressed Sensing-based Encryption: A Quantitative Analysis
Despite the linearity of its encoding, compressed sensing may be used to
provide a limited form of data protection when random encoding matrices are
used to produce sets of low-dimensional measurements (ciphertexts). In this
paper we quantify by theoretical means the resistance of the least complex form
of this kind of encoding against known-plaintext attacks. For both standard
compressed sensing with antipodal random matrices and recent multiclass
encryption schemes based on it, we show how the number of candidate encoding
matrices that match a typical plaintext-ciphertext pair is so large that the
search for the true encoding matrix is inconclusive. Such results on the practical
ineffectiveness of known-plaintext attacks underline the fact that even
closely-related signal recovery under encoding matrix uncertainty is doomed to
fail.
Practical attacks are then exemplified by applying compressed sensing with
antipodal random matrices as a multiclass encryption scheme to signals such as
images and electrocardiographic tracks, showing that the extracted information
on the true encoding matrix from a plaintext-ciphertext pair leads to no
significant signal recovery quality increase. This theoretical and empirical
evidence clarifies that, although not perfectly secure, both standard
compressed sensing and multiclass encryption schemes feature a noteworthy level
of security against known-plaintext attacks, therefore increasing their appeal as
negligible-cost encryption methods for resource-limited sensing applications.
Comment: IEEE Transactions on Information Forensics and Security, accepted for
publication. Article in press.
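To make the counting argument concrete, here is a hypothetical brute-force experiment (tiny dimensions so every antipodal row can be enumerated; the quantizer bin width is an arbitrary choice standing in for the digital nature of the ciphertext) showing how many candidate rows are consistent with a single plaintext-ciphertext pair:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16                 # tiny plaintext length, so all 2^n antipodal rows fit in memory
delta = 1.0            # quantizer bin width of the digital ciphertext (arbitrary)

x = rng.standard_normal(n)                  # known plaintext
a_true = rng.choice((-1.0, 1.0), size=n)    # one row of the secret antipodal matrix
y = delta * np.round(a_true @ x / delta)    # the attacker observes this quantized value

# Known-plaintext attack: enumerate every antipodal row and keep those whose
# quantized measurement of the known plaintext matches the observed ciphertext.
codes = np.arange(2 ** n)
rows = ((codes[:, None] >> np.arange(n)) & 1).astype(float) * 2.0 - 1.0
candidates = int(np.sum(delta * np.round(rows @ x / delta) == y))
```

Even at this toy scale many rows match the pair; at realistic dimensions the candidate set is combinatorially large, which is the quantitative core of the paper's security claim.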
Low-complexity Multiclass Encryption by Compressed Sensing
The idea that compressed sensing may be used to encrypt information from
unauthorised receivers has already been envisioned, but never explored in depth
since its security may seem compromised by the linearity of its encoding
process. In this paper we apply this simple encoding to define a general
private-key encryption scheme in which a transmitter distributes the same
encoded measurements to receivers of different classes, which are provided
partially corrupted encoding matrices and are thus allowed to decode the
acquired signal at provably different levels of recovery quality.
The security properties of this scheme are thoroughly analysed: firstly, the
properties of our multiclass encryption are theoretically investigated by
deriving performance bounds on the recovery quality attained by lower-class
receivers with respect to high-class ones. Then we perform a statistical
analysis of the measurements to show that, although not perfectly secure,
compressed sensing grants some level of security that comes at almost-zero cost
and thus may benefit resource-limited applications.
In addition to this we report some exemplary applications of multiclass
encryption by compressed sensing of speech signals, electrocardiographic tracks
and images, in which quality degradation is quantified as the impossibility of
some feature extraction algorithms to obtain sensitive information from
suitably degraded signal recoveries.
Comment: IEEE Transactions on Signal Processing, accepted for publication.
Article in press.
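A rough sketch of the multiclass idea (the concrete perturbation mechanism in the paper differs; here lower-class keys are modeled simply as sign-flipped copies of the true antipodal matrix, with all sizes and flip fractions chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 256, 128
A = rng.choice((-1.0, 1.0), size=(m, n))   # true antipodal encoding: high-class key

def lower_class_key(A, flip_fraction, rng):
    """Model a lower-class key as the true key with a fraction of sign flips."""
    B = A.copy()
    B[rng.random(A.shape) < flip_fraction] *= -1.0
    return B

x = np.zeros(n)
x[rng.choice(n, 5, replace=False)] = 1.0   # toy sparse plaintext
y = A @ x                                  # same measurements sent to every receiver

# The decoding mismatch grows with the amount of key corruption, which is
# what separates the recovery quality attainable by the different classes.
mismatch = [np.linalg.norm(y - lower_class_key(A, f, rng) @ x)
            for f in (0.0, 0.05, 0.2)]
```

A receiver holding the exact key sees zero mismatch, while increasingly corrupted keys introduce increasingly large model error, and therefore provably degraded recoveries.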
Consistent Basis Pursuit for Signal and Matrix Estimates in Quantized Compressed Sensing
This paper focuses on the estimation of low-complexity signals when they are
observed through uniformly quantized compressive observations. Among such
signals, we consider 1-D sparse vectors, low-rank matrices, or compressible
signals that are well approximated by one of these two models. In this context,
we prove the estimation efficiency of a variant of Basis Pursuit Denoise,
called Consistent Basis Pursuit (CoBP), enforcing consistency between the
observations and the re-observed estimate, while promoting its low-complexity
nature. We show that the reconstruction error of CoBP decays like $O(1/\sqrt{m})$
when all parameters but the number of measurements $m$ are fixed. Our proof is connected to recent bounds
on the proximity of vectors or matrices when (i) those belong to a set of small
intrinsic "dimension", as measured by the Gaussian mean width, and (ii) they
share the same quantized (dithered) random projections. By solving CoBP with a
proximal algorithm, we provide some extensive numerical observations that
confirm the theoretical bound as $m$ is increased, displaying even faster error
decay than predicted. The same phenomenon is observed in the special, yet
important case of 1-bit CS.
Comment: Keywords: Quantized compressed sensing, quantization, consistency,
error decay, low-rank, sparsity. 10 pages, 3 figures. Note about this
version: title change, typo corrections, clarification of the context, adding
a comparison with BPDN
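The consistency constraint at the heart of CoBP can be illustrated directly (a toy sketch with arbitrary sizes and bin width; no actual solver is run here): an estimate is admissible only if re-observing it through the same dithered quantizer reproduces the recorded measurements.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, delta = 64, 256, 0.25
A = rng.standard_normal((m, n)) / np.sqrt(m)
dither = rng.uniform(0.0, delta, size=m)

def observe(v):
    """Dithered, uniformly quantized compressive observation of v."""
    return delta * np.floor((A @ v + dither) / delta)

x = np.zeros(n)
x[:4] = rng.standard_normal(4)     # toy sparse signal
q = observe(x)                     # recorded quantized measurements

# CoBP restricts the search to estimates whose re-observation lands in
# exactly the same quantization bins as q; the true signal always qualifies,
# while a generic perturbation of it typically does not.
true_is_consistent = np.array_equal(observe(x), q)
perturbed_is_consistent = np.array_equal(observe(x + rng.standard_normal(n)), q)
```

Enforcing this bin-exact agreement, rather than a mere l2 fidelity as in Basis Pursuit Denoise, is what the "consistent" in CoBP refers to.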
Matrix Methods for the Efficient Acquisition and Encryption of Signals in Compressed Form
The idea of balancing the resources spent in the acquisition and encoding of natural signals strictly to their intrinsic information content has interested nearly a decade of research under the name of compressed sensing. In this doctoral dissertation we develop some extensions and improvements upon this technique's foundations, by modifying the random sensing matrices on which the signals of interest are projected to achieve different objectives.
Firstly, we propose two methods for the adaptation of sensing matrix ensembles to the second-order moments of natural signals. These techniques leverage the maximisation of different proxies for the quantity of information acquired by compressed sensing, and are efficiently applied in the encoding of electrocardiographic tracks with minimum-complexity digital hardware.
Secondly, we focus on the possibility of using compressed sensing as a method to provide a partial, yet cryptanalysis-resistant form of encryption; in this context, we show how a random matrix generation strategy with a controlled amount of perturbations can be used to distinguish between multiple user classes with different quality of access to the encrypted information content.
Finally, we explore the application of compressed sensing in the design of a multispectral imager, by implementing an optical scheme that entails a coded aperture array and Fabry-Pérot spectral filters. The signal recoveries obtained by processing real-world measurements show promising results that leave room for improvements in the sensing-matrix calibration of the devised imager.
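The adaptation of sensing-row statistics to the signal's second-order moments mentioned above can be caricatured as follows (an AR(1) correlation model, a 50/50 covariance blend, and all sizes are arbitrary illustrative assumptions, not the dissertation's actual design):

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 64, 16
rho = 0.95
# Toy second-order model of the signal class: AR(1) correlation matrix.
C = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

def draw_rows(R, m, rng):
    """Draw m sensing rows from N(0, R) via a Cholesky factor of R."""
    return rng.standard_normal((m, n)) @ np.linalg.cholesky(R).T

A_white = draw_rows(np.eye(n), m, rng)
# Blend of white rows and the signal correlation; both covariances have
# trace n, so the average row energy is the same in the two ensembles.
A_adapted = draw_rows(0.5 * (np.eye(n) + C), m, rng)

# Average measurement energy collected from signals of the class.
X = rng.multivariate_normal(np.zeros(n), C, size=2000).T
e_white = np.mean(np.sum((A_white @ X) ** 2, axis=0))
e_adapted = np.mean(np.sum((A_adapted @ X) ** 2, axis=0))
```

At equal row energy, the adapted ensemble collects markedly more signal energy per measurement, which is one of the proxies for acquired information that such adaptation methods maximise.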
Generalized Inpainting Method for Hyperspectral Image Acquisition
A recently designed hyperspectral imaging device enables multiplexed
acquisition of an entire data volume in a single snapshot thanks to
monolithically-integrated spectral filters. Such an agile imaging technique
comes at the cost of a reduced spatial resolution and the need for a
demosaicing procedure on its interleaved data. In this work, we address both
issues and propose an approach inspired by recent developments in compressed
sensing and analysis sparse models. We formulate our superresolution and
demosaicing task as a 3-D generalized inpainting problem. Interestingly, the
target spatial resolution can be adjusted for mitigating the compression level
of our sensing. The reconstruction procedure uses a fast greedy method called
Pseudo-inverse IHT. We also show on simulations that a random arrangement of
the spectral filters on the sensor is preferable to a regular mosaic layout as it
improves the quality of the reconstruction. The efficiency of our technique is
demonstrated through numerical experiments on both synthetic and real data as
acquired by the snapshot imager.
Comment: Keywords: Hyperspectral, inpainting, iterative hard thresholding,
sparse models, CMOS, Fabry-Pérot
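Plain iterative hard thresholding on a toy inpainting problem gives a feel for the reconstruction step (this is generic IHT with a random orthonormal synthesis basis, not the Pseudo-inverse IHT variant or the analysis sparse model of the paper; all sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 128, 96, 4        # signal length, observed samples, sparsity (toy)

# Random orthonormal sparsity basis, standing in for the model used there.
Psi, _ = np.linalg.qr(rng.standard_normal((n, n)))
s_true = np.zeros(n)
s_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_true = Psi @ s_true

mask = rng.choice(n, m, replace=False)  # inpainting: only these samples are kept
Phi = Psi[mask]                         # effective sensing matrix
y = x_true[mask]

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]
    out[keep] = v[keep]
    return out

s = np.zeros(n)
mu = n / m                              # step size compensating the subsampling
for _ in range(500):
    s = hard_threshold(s + mu * Phi.T @ (y - Phi @ s), k)

rel_err = np.linalg.norm(Psi @ s - x_true) / np.linalg.norm(x_true)
```

The missing samples are filled in by alternating a gradient step on the observed entries with a hard-thresholding projection onto the sparse model, the same greedy principle that the paper's faster Pseudo-inverse IHT accelerates.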
Through the Haze: a Non-Convex Approach to Blind Gain Calibration for Linear Random Sensing Models
Computational sensing strategies often suffer from calibration errors in the physical implementation of their ideal sensing models. Such uncertainties are typically addressed by using multiple, accurately chosen training signals to recover the missing information on the sensing model, an approach that can be resource-consuming and cumbersome. Conversely, blind calibration does not employ any training signal, but corresponds to a bilinear inverse problem whose algorithmic solution is an open issue.
We here address blind calibration as a non-convex problem for linear random sensing models, in which we aim to recover an unknown signal from its projections on sub-Gaussian random vectors, each subject to an unknown positive multiplicative factor (or gain). To solve this optimisation problem we resort to projected gradient descent starting from a suitable, carefully chosen initialisation point. An analysis of this algorithm allows us to show that it converges to the exact solution provided a sample complexity requirement is met, i.e., relating convergence to the amount of information collected during the sensing process. Interestingly, we show that this requirement grows linearly (up to log factors) in the number of unknowns of the problem. This sample complexity is found both in the absence of prior information and when subspace priors are available for both the signal and the gains, allowing a further reduction of the number of observations required for our recovery guarantees to hold.
Moreover, in the presence of noise we show how our descent algorithm yields a solution whose accuracy degrades gracefully with the amount of noise affecting the measurements. Finally, we present some numerical experiments in an imaging context, where our algorithm allows for a simple solution to blind calibration of the gains in a sensor array.
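A self-contained toy version of the bilinear model can be solved as follows (alternating least squares is used here as a simple stand-in for the paper's projected gradient descent, multiple snapshots are assumed to make the problem overdetermined, and all sizes and the gain range are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, p = 32, 256, 4            # signal dim, sensors/gains, snapshots (arbitrary)
A = rng.standard_normal((m, n))             # sub-Gaussian random projections
X_true = rng.standard_normal((n, p))        # unknown signals (one per snapshot)
g_true = 1.0 + 0.2 * rng.uniform(-1.0, 1.0, m)   # unknown positive gains near 1
Y = g_true[:, None] * (A @ X_true)          # uncalibrated measurements

g = np.ones(m)                              # start from the nominal, gain-free model
for _ in range(100):
    # Signals given gains: ordinary least squares on the gain-weighted model.
    X = np.linalg.lstsq(g[:, None] * A, Y, rcond=None)[0]
    Z = A @ X
    # Gains given signals: an independent scalar least squares per sensor.
    g = np.einsum('ij,ij->i', Y, Z) / np.einsum('ij,ij->i', Z, Z)
    scale = g.mean()                        # fix the global scaling ambiguity
    g /= scale
    X *= scale

rel_err_gains = np.linalg.norm(g * g_true.mean() - g_true) / np.linalg.norm(g_true)
```

Since gains and signals are only identifiable up to a global scale, the iteration pins the mean gain to one; the recovered gains then match the true ones after undoing that normalisation.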